62 research outputs found
Robust Total Least Mean M-Estimate Normalized Subband Adaptive Filter Algorithm for Impulse Noise and Noisy Inputs
When the input signal is correlated and both the input and output signals are
contaminated by Gaussian noise, the total least squares normalized subband
adaptive filter (TLS-NSAF) algorithm performs well. However, when disturbed by
impulse noise, its convergence performance deteriorates rapidly. To solve this
problem, this paper proposes the robust total least mean M-estimate normalized
subband adaptive filter (TLMM-NSAF) algorithm. We also conduct a detailed
theoretical performance analysis of TLMM-NSAF and obtain its stable step-size
range and theoretical steady-state mean squared deviation (MSD). To further
improve performance, we propose a new variable step size (VSS) method for the
algorithm. Finally, the robustness of the proposed algorithm and the agreement
between theoretical and simulated values are verified by computer simulations
of system identification and echo cancellation under different noise models.
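The core idea behind M-estimate adaptive filters can be made concrete with a small sketch. The following is not the paper's TLMM-NSAF algorithm (which operates on subbands and a total-least-squares criterion); it only illustrates the M-estimate mechanism inside a plain normalized-LMS update: the error is passed through a Huber-type score function so an impulsive outlier has bounded influence on the weight update. All parameter names and values (`mu`, `delta`, `threshold`) are illustrative choices, not from the paper.

```python
# Illustrative M-estimate robust update (NOT the paper's TLMM-NSAF):
# a normalized-LMS step whose error is clipped by a Huber-type score,
# so impulse noise cannot produce an arbitrarily large weight change.

def huber_score(e, threshold):
    """Huber M-estimate score: identity for small errors, clipped for outliers."""
    if abs(e) <= threshold:
        return e
    return threshold * (1.0 if e > 0 else -1.0)

def robust_nlms_step(w, x, d, mu=0.5, delta=1e-6, threshold=1.0):
    """One normalized-LMS update with a Huber-weighted error.
    w: weight list, x: input regressor list, d: desired sample."""
    y = sum(wi * xi for wi, xi in zip(w, x))    # filter output
    e = d - y                                    # a-priori error
    g = huber_score(e, threshold)                # bounded-influence error
    norm = delta + sum(xi * xi for xi in x)      # regularized input energy
    return [wi + mu * g * xi / norm for wi, xi in zip(w, x)]

# Demo: a huge (impulse-like) error of 5.0 is clipped to the threshold 1.0,
# so the weight moves by at most mu * threshold / ||x||^2.
w_demo = robust_nlms_step([0.0, 0.0], [1.0, 0.0], d=5.0, mu=0.5, threshold=1.0)
```

Without the clipping, the same impulse would move the weight five times further; this bounded influence is what restores convergence under impulse noise.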
Identifying Tree Preservation Order Protected Trees by Deep Learning in Greater London Area
A Tree Preservation Order (TPO) protects specific trees from damage and destruction, but the decision to grant one is highly subjective. This research collected and analyzed TPO data, aerial images, geographic data, and socio-economic data for the Greater London area and developed a multi-input deep learning (DL) framework to classify TPO-protected and non-TPO-protected trees. The combined use of aerial images and GIS data with a fusion model of ResNet50 and a multilayer perceptron network produced the best classification accuracy, 87.32%. The result indicates that the multi-input DL model identifies the social attributes of trees more robustly than single-input DL models.
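The multi-input architecture described above is a form of late fusion: each modality is embedded by its own branch, the embeddings are concatenated, and a shared head produces the prediction. The sketch below shows only that wiring; the paper's actual branches are a ResNet50 over aerial images and an MLP over GIS/socio-economic features, whereas here both branches are tiny dense-layer stand-ins, and all weights and layer sizes are invented for illustration.

```python
import math

# Minimal late-fusion sketch (branch contents are stand-ins for the paper's
# ResNet50 image branch and MLP tabular branch; weights are illustrative).

def relu(v):
    return [max(0.0, x) for x in v]

def linear(v, W, b):
    """Dense layer: W is a list of weight rows, one per output unit."""
    return [sum(wi * xi for wi, xi in zip(row, v)) + bi for row, bi in zip(W, b)]

def fused_predict(img_feat, tab_feat, params):
    """Embed each modality, concatenate the embeddings (late fusion),
    and score the 'TPO-protected' class with a logistic head."""
    h_img = relu(linear(img_feat, *params["img"]))  # image branch stand-in
    h_tab = relu(linear(tab_feat, *params["tab"]))  # tabular branch stand-in
    fused = h_img + h_tab                           # list concat = fusion
    (logit,) = linear(fused, *params["head"])
    return 1.0 / (1.0 + math.exp(-logit))           # P(TPO-protected)

# Demo with hand-picked weights so the arithmetic is easy to follow.
params = {
    "img": ([[1.0, 0.0]], [0.0]),    # 2-dim image feature -> 1-dim embedding
    "tab": ([[0.5]], [0.0]),         # 1-dim tabular feature -> 1-dim embedding
    "head": ([[1.0, 1.0]], [-2.0]),  # fused 2-dim vector -> 1 logit
}
p = fused_predict([1.0, 0.0], [2.0], params)
```

In a real framework the same structure appears as two submodules whose outputs are concatenated before the classifier head; training then tunes both branches jointly.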
HEDB: An Efficient and Elastic Encrypted Database Via Arithmetic-And-Logic Fully Homomorphic Encryption
As concerns about data privacy grow, encrypted database management systems (DBMS) based on fully homomorphic encryption (FHE) are attracting increasing research attention, since FHE permits a DBMS to be outsourced directly to cloud servers without revealing any plaintext data. However, real-world deployment of FHE-based DBMS faces two main challenges: i) high computational latency, and ii) lack of elastic query processing capability, both of which stem from inherent limitations of the underlying FHE operators. Here, we introduce HEDB, a fully homomorphically encrypted, efficient and elastic DBMS framework based on a new FHE infrastructure. By proposing and integrating new arithmetic and logic homomorphic operators, we devise fast and high-precision homomorphic comparison and aggregation algorithms that enable a variety of SQL queries over FHE ciphertexts, e.g., compound filter-aggregation, sorting, grouping, and joining. In addition, in contrast to existing encrypted DBMS that only support aggregated information retrieval, our framework permits further server-side analytical processing over the queried FHE ciphertexts, such as private decision tree evaluation. In our experiments, we rigorously study the efficiency and flexibility of HEDB and show that it homomorphically evaluates end-to-end SQL queries faster than the state-of-the-art solution, completing a TPC-H query over a 16-bit 10K-row database within 241 seconds.
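The key property HEDB exploits is that certain operations can be carried out on ciphertexts so they hold on the underlying plaintexts. The toy below illustrates only that idea with additive masking: it is NOT FHE, is not secure, and has nothing to do with HEDB's actual arithmetic/logic operators; it just makes "the server aggregates without seeing plaintext" concrete, since the sum of ciphertexts decrypts under the sum of keys.

```python
import random

# Toy additively homomorphic encoding (NOT FHE, NOT secure -- purely
# didactic). E(m) = (m + k) mod N, so sums of ciphertexts decrypt under
# the sum of the keys. Modulus choice and key handling are ad hoc.

N = 2**61 - 1  # public modulus, large enough that small sums never wrap

def encrypt(m, key):
    return (m + key) % N

def decrypt(c, key):
    return (c - key) % N

def server_sum(ciphertexts):
    """The 'server' aggregates ciphertexts without seeing any plaintext."""
    total = 0
    for c in ciphertexts:
        total = (total + c) % N
    return total

# Client side: encrypt a column of values under fresh per-row keys.
values = [100, 250, 75]
keys = [random.randrange(N) for _ in values]
cts = [encrypt(v, k) for v, k in zip(values, keys)]

# Server computes the encrypted SUM; the client decrypts with the summed key.
enc_total = server_sum(cts)
total = decrypt(enc_total, sum(keys) % N)
```

Real FHE schemes provide this additive property (and much more: multiplication, comparison via bootstrapping) without requiring the client to track per-row keys, which is what makes outsourced SQL evaluation possible.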
HEIR: A Unified Representation for Cross-Scheme Compilation of Fully Homomorphic Computation
We propose a new compiler framework that automates code generation over multiple fully homomorphic encryption (FHE) schemes. While it was recently shown that algorithms combining multiple FHE schemes (e.g., CKKS and TFHE) achieve high execution efficiency and task utility at the same time, developing fast cross-scheme FHE algorithms for real-world applications generally requires heavy hand-tuned optimization by cryptographic experts, resulting in either high usability costs or low computational efficiency. To resolve this usability-efficiency dilemma, we design and implement HEIR, a compiler framework based on multi-level intermediate representation (IR). To achieve cross-scheme compilation of efficient FHE circuits, we develop a two-stage code-lowering structure based on our custom IR dialects. First, the plaintext program and its associated data types are converted into FHE-friendly dialects in the transformation stage. Then, in the optimization stage, we apply FHE-specific optimizations to lower the transformed dialect into our bottom-level FHE library operators. In our experiments, we implement the entire software stack for HEIR and demonstrate that complex end-to-end programs, such as homomorphic K-Means clustering and homomorphic data aggregation in databases, can easily be compiled to run faster than programs generated by state-of-the-art FHE compilers.
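Two-stage lowering through IR dialects can be sketched as successive rule-driven rewriting: a first pass maps plaintext operations to FHE-friendly ops, and a second pass maps those to backend library calls. The dialect names (`arith`, `fhe`) and library operators below are invented for this sketch and are not HEIR's actual IR; the point is only the shape of a multi-level lowering pipeline.

```python
# Minimal sketch of two-stage IR lowering in the spirit of HEIR.
# All dialect and operator names here are hypothetical.

STAGE1 = {  # transformation stage: plaintext dialect -> FHE-friendly dialect
    "arith.add": ["fhe.add_ct"],
    "arith.cmp": ["fhe.sub_ct", "fhe.sign_bootstrap"],  # compare = sign of diff
}

STAGE2 = {  # optimization/lowering stage: FHE dialect -> library operators
    "fhe.add_ct": ["lib.ckks_add"],
    "fhe.sub_ct": ["lib.ckks_sub"],
    "fhe.sign_bootstrap": ["lib.tfhe_bootstrap_sign"],  # scheme-switch point
}

def lower(program, rules):
    """Rewrite each op via the rule table; unknown ops pass through."""
    out = []
    for op in program:
        out.extend(rules.get(op, [op]))
    return out

def compile_program(program):
    return lower(lower(program, STAGE1), STAGE2)

# A comparison followed by an addition lowers to three library calls,
# with the comparison routed through a TFHE-style sign bootstrap.
ops = compile_program(["arith.cmp", "arith.add"])
```

Keeping the stages separate is what lets scheme-specific optimizations (e.g., choosing where to switch between CKKS and TFHE) run on the intermediate dialect before any backend code is emitted.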
Improving fairness in machine learning systems: What do industry practitioners need?
The potential for machine learning (ML) systems to amplify social inequities
and unfairness is receiving increasing popular and academic attention. A surge
of recent work has focused on the development of algorithmic tools to assess
and mitigate such unfairness. If these tools are to have a positive impact on
industry practice, however, it is crucial that their design be informed by an
understanding of real-world needs. Through 35 semi-structured interviews and an
anonymous survey of 267 ML practitioners, we conduct the first systematic
investigation of commercial product teams' challenges and needs for support in
developing fairer ML systems. We identify areas of alignment and disconnect
between the challenges faced by industry practitioners and solutions proposed
in the fair ML research literature. Based on these findings, we highlight
directions for future ML and HCI research that will better address industry
practitioners' needs.
Comment: To appear in the 2019 ACM CHI Conference on Human Factors in Computing Systems (CHI 2019).
Unleashing novel horizons in advanced prostate cancer treatment: investigating the potential of prostate specific membrane antigen-targeted nanomedicine-based combination therapy
Prostate cancer (PCa) is a prevalent malignancy with increasing incidence in middle-aged and older men. Despite various treatment options, advanced metastatic PCa remains challenging, with poor prognosis and limited effective therapies. Nanomedicine, with its targeted drug delivery capabilities, has emerged as a promising approach to enhance treatment efficacy and reduce adverse effects. Prostate-specific membrane antigen (PSMA) is one of the most distinctive and highly selective biomarkers for PCa, robustly expressed in PCa cells. In this review, we explore the applications of PSMA-targeted nanomedicines in advanced PCa management. Our primary objective is to bridge the gap between cutting-edge nanomedicine research and clinical practice, making it accessible to the medical community. We discuss mainstream treatment strategies for advanced PCa, including chemotherapy, radiotherapy, and immunotherapy, in the context of PSMA-targeted nanomedicines. Additionally, we elucidate novel treatment concepts such as photodynamic and photothermal therapies, along with nano-theragnostics. We present the content in a clear and accessible manner, appealing to general physicians, including those with limited backgrounds in biochemistry and bioengineering. The review emphasizes the potential benefits of PSMA-targeted nanomedicines in enhancing treatment efficiency and improving patient outcomes. While PSMA-targeted nano-drug delivery has demonstrated promising results, further investigation is required to understand the precise mechanisms of action, pharmacotoxicity, and long-term outcomes. Through meticulous optimization of the combination of nanomedicines and PSMA ligands, PSMA-targeted nanomedicine-based combination therapy could bring renewed hope for patients with advanced PCa.
Efficiency of Finding Muon Track Trigger Primitives in CMS Cathode Strip Chambers
In the CMS Experiment, muon detection in the forward direction is accomplished by cathode strip chambers (CSC). These detectors identify muons, provide a fast muon trigger, and give a precise measurement of the muon trajectory. There are 468 six-plane CSCs in the system. The efficiency of finding muon trigger primitives (muon track segments) was studied using 36 CMS CSCs and cosmic ray muons during the Magnet Test and Cosmic Challenge (MTCC) exercise conducted by the CMS experiment in 2006. In contrast to earlier studies that used muon beams to illuminate a very small chamber area (… m), the results presented in this paper were obtained with many installed CSCs operating in situ over an area of … m as part of the CMS experiment. The efficiency of finding 2-dimensional trigger primitives within 6-layer chambers was found to be …. These segments, found by the CSC electronics within … ns after the passage of a muon through the chambers, are the input to the Level-1 muon trigger and are also a necessary condition for chambers to be read out by the Data Acquisition System.
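An efficiency of this kind is a binomial fraction: out of the cases where a reference track predicts that a chamber should contain a segment, how often was the trigger primitive actually found. The sketch below shows that computation with a simple binomial standard error; the counts are made up, since the paper's measured values are not reproduced in this abstract.

```python
import math

# Efficiency of finding trigger primitives as a binomial fraction with a
# simple standard error. The counts below are ILLUSTRATIVE placeholders,
# not the paper's measured numbers.

def segment_efficiency(n_found, n_expected):
    """Efficiency = found / expected, with binomial standard error
    sqrt(eff * (1 - eff) / n_expected)."""
    eff = n_found / n_expected
    err = math.sqrt(eff * (1.0 - eff) / n_expected)
    return eff, err

# Hypothetical example: 9923 segments found in 10000 predicted crossings.
eff, err = segment_efficiency(n_found=9923, n_expected=10000)
```

For efficiencies very close to 0 or 1 this symmetric error underestimates the uncertainty, and interval methods (e.g. Clopper-Pearson) are preferred in practice.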